With the increasing demand for computational photography and imaging on mobile platforms, advanced image sensors and novel algorithms are being developed and integrated into camera systems. However, the lack of high-quality data for research and the rare opportunities for in-depth exchange between industry and academia constrain the development of mobile intelligent photography and imaging (MIPI). To bridge the gap, we introduce the first MIPI challenge, which includes five tracks focusing on novel image sensors and imaging algorithms. This paper introduces RGBW Joint Remosaic and Denoise, one of the five tracks, which works on the interpolation of an RGBW color filter array (CFA) to Bayer at full resolution. The participants were provided with a new dataset including 70 (training) and 15 (validation) scenes of high-quality RGBW and Bayer pairs. In addition, RGBW images at different noise levels (0dB, 24dB, and 42dB) were provided for each scene. All the data were captured with an RGBW sensor under both outdoor and indoor conditions. The final results were evaluated using objective metrics including PSNR, SSIM, LPIPS, and KLD. This paper provides a detailed description of all the models developed in this challenge. More details of this challenge and a link to the dataset can be found at https://github.com/mipi-challenge/mipi2022.
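Several tracks in this digest evaluate restored images with PSNR, among other metrics. As a quick illustration of what that fidelity metric computes (this is not the challenge's official evaluation code; the function name and 8-bit normalization are assumptions), a minimal PSNR sketch looks like:

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a restored image and its ground truth."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher is better; a mean squared error of zero (identical images) gives an infinite PSNR.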
With the increasing demand for computational photography and imaging on mobile platforms, advanced image sensors and novel algorithms are being developed and integrated into camera systems. However, the lack of high-quality data for research and the rare opportunities for in-depth exchange between industry and academia constrain the development of mobile intelligent photography and imaging (MIPI). To bridge the gap, we introduce the first MIPI challenge, which includes five tracks focusing on novel image sensors and imaging algorithms. This paper introduces RGBW Joint Fusion and Denoise, one of the five tracks, which works on the fusion of binning-mode RGBW to Bayer. The participants were provided with a new dataset including 70 (training) and 15 (validation) scenes of high-quality RGBW and Bayer pairs. In addition, RGBW images at different noise levels (24dB and 42dB) were provided for each scene. All the data were captured with an RGBW sensor under both outdoor and indoor conditions. The final results were evaluated using objective metrics including PSNR, SSIM, LPIPS, and KLD. This paper provides a detailed description of all the models developed in this challenge. More details of this challenge and a link to the dataset can be found at https://github.com/mipi-challenge/mipi2022.
With the increasing demand for computational photography and imaging on mobile platforms, advanced image sensors and novel algorithms are being developed and integrated into camera systems. However, the lack of high-quality data for research and the rare opportunities for in-depth exchange between industry and academia constrain the development of mobile intelligent photography and imaging (MIPI). To bridge the gap, we introduce the first MIPI challenge, which includes five tracks focusing on novel image sensors and imaging algorithms. This paper introduces Quad Remosaic and Denoise, one of the five tracks, which works on the interpolation of a Quad CFA to Bayer at full resolution. The participants were provided with a new dataset including 70 (training) and 15 (validation) scenes of high-quality Quad and Bayer pairs. In addition, Quad images at different noise levels (0dB, 24dB, and 42dB) were provided for each scene. All the data were captured with a Quad sensor under both outdoor and indoor conditions. The final results were evaluated using objective metrics including PSNR, SSIM, LPIPS, and KLD. This paper provides a detailed description of all the models developed in this challenge. More details of this challenge and a link to the dataset can be found at https://github.com/mipi-challenge/mipi2022.
With the increasing demand for computational photography and imaging on mobile platforms, advanced image sensors and novel algorithms are being developed and integrated into camera systems. However, the lack of high-quality data for research and the rare opportunities for in-depth exchange between industry and academia constrain the development of mobile intelligent photography and imaging (MIPI). To bridge the gap, we introduce the first MIPI challenge, which includes five tracks focusing on novel image sensors and imaging algorithms. This paper introduces RGB+ToF Depth Completion, one of the five tracks, which works on the fusion of an RGB sensor and a ToF sensor (with spot illumination). The participants were provided with a new dataset called TetrasRGBD, which contains 18k pairs of high-quality synthetic RGB+Depth training data and 2.3k pairs of testing data from mixed sources. All the data were collected in indoor scenes. We required the running time of all methods to be real-time on a desktop GPU. The final results were evaluated using objective metrics and, subjectively, the Mean Opinion Score (MOS). This paper provides a detailed description of all the models developed in this challenge. More details of this challenge and a link to the dataset can be found at https://github.com/mipi-challenge/mipi2022.
With the increasing demand for computational photography and imaging on mobile platforms, advanced image sensors and novel algorithms are being developed and integrated into camera systems. However, the lack of high-quality data for research and the rare opportunities for in-depth exchange between industry and academia constrain the development of mobile intelligent photography and imaging (MIPI). To bridge the gap, we introduce the first MIPI challenge, which includes five tracks focusing on novel image sensors and imaging algorithms. In this paper, we summarize and review the Under-Display Camera (UDC) Image Restoration track of MIPI 2022. In total, 167 participants successfully registered, and 19 teams submitted results in the final testing phase. The solutions developed in this challenge achieved state-of-the-art performance on under-display camera image restoration. This paper provides a detailed description of all the models developed in this challenge. More details of this challenge and a link to the dataset can be found at https://github.com/mipi-challenge/mipi2022.
Current advances in recommender systems have been remarkably successful in optimizing immediate engagement. However, long-term user engagement, a more desirable performance metric, remains difficult to improve. Meanwhile, recent reinforcement learning (RL) algorithms have shown their effectiveness in a variety of long-term goal optimization tasks. For this reason, RL is widely considered as a promising framework for optimizing long-term user engagement in recommendation. Despite being a promising approach, the application of RL heavily relies on well-designed rewards, but designing rewards related to long-term user engagement is quite difficult. To mitigate the problem, we propose a novel paradigm, Preference-based Recommender systems (PrefRec), which allows RL recommender systems to learn from preferences about users' historical behaviors rather than explicitly defined rewards. Such preferences are easily accessible through techniques such as crowdsourcing, as they do not require any expert knowledge. With PrefRec, we can fully exploit the advantages of RL in optimizing long-term goals, while avoiding complex reward engineering. PrefRec uses the preferences to automatically train a reward function in an end-to-end manner. The reward function is then used to generate learning signals to train the recommendation policy. Furthermore, we design an effective optimization method for PrefRec, which uses an additional value function, expectile regression and reward model pre-training to improve the performance. Extensive experiments are conducted on a variety of long-term user engagement optimization tasks. The results show that PrefRec significantly outperforms previous state-of-the-art methods in all the tasks.
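The core idea behind PrefRec above, training a reward function from pairwise preferences instead of hand-designed rewards, can be illustrated with a tiny Bradley-Terry-style example: given pairs where trajectory `a` is preferred over trajectory `b`, fit a reward model so that preferred trajectories score higher. This is a generic sketch under simplifying assumptions (a linear reward on feature vectors, plain gradient descent), not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def preference_loss_grad(w, xa, xb):
    """Negative log-likelihood of 'a preferred over b' under a linear reward
    r(x) = w @ x, with P(a > b) = sigmoid(r(a) - r(b)), plus its gradient."""
    diff = xa @ w - xb @ w
    p = 1.0 / (1.0 + np.exp(-diff))
    loss = -np.log(p + 1e-12).mean()
    grad = ((p - 1.0)[:, None] * (xa - xb)).mean(axis=0)
    return loss, grad

# Synthetic preferences labeled by a hidden "true" reward w_true.
w_true = np.array([1.0, -2.0, 0.5])
xa = rng.normal(size=(256, 3))
xb = rng.normal(size=(256, 3))
# Swap pairs so that xa is always the preferred trajectory under w_true.
swap = (xa @ w_true) < (xb @ w_true)
xa[swap], xb[swap] = xb[swap].copy(), xa[swap].copy()

# Gradient descent on the preference loss recovers a usable reward model.
w = np.zeros(3)
for _ in range(200):
    loss, grad = preference_loss_grad(w, xa, xb)
    w -= 0.5 * grad

# Fraction of pairs the learned reward ranks consistently with the labels.
accuracy = float(np.mean((xa @ w) > (xb @ w)))
```

In the paper's setting, the learned reward would then provide the training signal for the recommendation policy; here it simply demonstrates that preference labels alone suffice to recover a ranking-consistent reward.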
基于文本的图像标题(TextCAP)需要同时对视觉内容的理解并读取图像文本以生成自然语言描述。虽然一项任务可以教导机器来了解复杂的人类环境进一步鉴于我们日常环境中的文本是全部的,但它在正常标题中提出了额外的挑战。基于文本的图像直观地包含丰富和复杂的多模式关系内容,即可以从多视图而不是单个字幕来扩散图像细节。当然,我们可以介绍额外的配对训练数据以显示图像描述的多样性,这一过程是具有额外文本的文本映射对注释的劳动密集型和耗时。基于上述洞察力,我们调查如何使用未配对的培训范例来生成专注于不同图像零件的不同标题。我们提出了多模式关系图对抗性推论(魔法)框架,用于多样化和未配对的Textcap。该框架可以自适应地构建图形之间的图像和模型复杂关系的多个多模式关系图来表示描述性分集。此外,从建模的图表中开发了一种级联的生成对抗性网络,以推断图像句子特征对齐和语言相干水平中的未配对字幕。我们验证了魔法在从图像的不同关系信息项目生成不同标题时的有效性。实验结果表明,魔法可以在不使用任何图像标题训练对的情况下产生非常有前途的结果。
translated by 谷歌翻译
Benefiting from the intrinsic supervision information exploitation capability, contrastive learning has achieved promising performance in the field of deep graph clustering recently. However, we observe that two drawbacks of the positive and negative sample construction mechanisms limit the performance of existing algorithms from further improvement. 1) The quality of positive samples heavily depends on the carefully designed data augmentations, while inappropriate data augmentations would easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are not reliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) by mining the intrinsic supervision information in the high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct the positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function to pull together samples from the same cluster while pushing away those from other clusters by maximizing and minimizing the cross-view cosine similarity between positive and negative samples. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with the existing state-of-the-art algorithms.
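The cluster-guided objective described in the CCGC abstract, maximizing cross-view cosine similarity of positive pairs while minimizing it between centers of different high-confidence clusters, can be sketched as follows. This is a schematic reading of the abstract, not the authors' code; the function names and the equal weighting of the two terms are assumptions:

```python
import numpy as np

def _normalize(x):
    # Row-wise L2 normalization so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def cluster_guided_loss(z1, z2, labels, num_clusters):
    """Contrastive loss over two views z1, z2 (n x d) with hard cluster labels.

    Positive pairs: the same node's embeddings across the two views.
    Negative pairs: centers of *different* clusters across the two views.
    """
    z1n, z2n = _normalize(z1), _normalize(z2)
    pos = (z1n * z2n).sum(axis=1).mean()  # cross-view cosine of positives
    c1 = _normalize(np.stack([z1[labels == c].mean(axis=0)
                              for c in range(num_clusters)]))
    c2 = _normalize(np.stack([z2[labels == c].mean(axis=0)
                              for c in range(num_clusters)]))
    sim = c1 @ c2.T  # pairwise cross-view center similarity
    neg = (sim.sum() - np.trace(sim)) / (num_clusters * (num_clusters - 1))
    return neg - pos  # minimize: pull positives together, push centers apart
```

With two well-separated clusters and identical views, the positive term is 1 and the off-diagonal center similarities vanish, so the loss approaches its minimum of -1.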
To generate high-quality rendered images for real-time applications, it is common to trace only a few samples per pixel (spp) at a lower resolution and then supersample to the high resolution. Based on the observation that the rendered pixels at a low resolution are typically highly aliased, we present a novel method for neural supersampling based on ray tracing 1/4-spp samples at the high resolution. Our key insight is that the ray-traced samples at the target resolution are accurate and reliable, which makes the supersampling an interpolation problem. We present a mask-reinforced neural network to reconstruct and interpolate high-quality image sequences. First, a novel temporal accumulation network is introduced to compute the correlation between current and previous features to significantly improve their temporal stability. Then a reconstruction network based on a multi-scale U-Net with skip connections is adopted for reconstruction and generation of the desired high-resolution image. Experimental results and comparisons have shown that our proposed method can generate higher-quality supersampling results than current state-of-the-art methods, without increasing the total number of ray-traced samples.
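For context on the temporal accumulation step mentioned above: classical real-time supersamplers blend the current frame with reprojected history via an exponential moving average gated by a validity (disocclusion) mask, and the learned accumulation network replaces this hand-tuned blend. A minimal sketch of that classical baseline, with the blend factor and masking scheme as illustrative assumptions:

```python
import numpy as np

def temporal_accumulate(current, history, valid, alpha=0.2):
    """Exponential temporal accumulation with a disocclusion mask.

    Where `valid` is True, the reprojected history is blended with the
    current frame; elsewhere the current frame is used unmodified.
    """
    blended = alpha * current + (1.0 - alpha) * history
    return np.where(valid, blended, current)
```

A small `alpha` gives strong temporal smoothing but ghosting on disocclusions, which is exactly what the mask (and, in the paper, the learned network) is meant to suppress.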
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment from an untrimmed video by a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary-bias: The annotated target segment generally refers to two specific frames as corresponding start and end timestamps. The video downsampling process may lose these two frames and take the adjacent irrelevant frames as new boundaries. 2) Reasoning-bias: Such incorrect new boundary frames also lead to reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper, we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. Such a mechanism is also able to supplement the absent consecutive visual semantics to the sampled sparse frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.